Layered graphical models for tracking partially-occluded moving objects in video (PhD thesis)
Tracking multiple targets using fixed cameras with non-overlapping views is a challenging problem. One of the challenges is predicting and tracking through occlusions caused by other targets or by fixed objects in the scene. Considerable effort has been devoted toward developing appearance models that are robust to partial occlusions, tracking algorithms that cope with short-term loss of observations, and algorithms that learn static occlusion maps. In this thesis we consider scenarios where it is impossible to learn a static occlusion map. This is often the case when the scene consists of both people and large objects whose position is not permanently fixed. These objects may enter, leave or relocate within the scene during a short time span. We call such objects "relocatable objects" or "relocatable occluders."
We develop a representation for scenes containing relocatable objects that can cause partial occlusions of people in a camera's field of view. In many practical applications, relocatable objects tend to appear often; therefore, models for them can be learned off-line and stored in a database. We formulate an occluder-centric representation, called a graphical model layer, where a person's motion in the ground plane is defined as a first-order Markov process on activity zones, while image evidence is aggregated in 2D observation regions that are depth-ordered with respect to the occlusion mask of the relocatable object. We represent real-world scenes as a composition of depth-ordered, interacting graphical model layers, and account for image evidence in a way that handles mutual overlap of the observation regions and their occlusions by the relocatable objects. These layers interact: proximate ground plane zones of different model instances are linked to allow a person to move between the layers, and image evidence is shared between the observation regions of these models.
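The layer's motion model described above can be sketched as a single forward-filtering step: a first-order Markov chain over discrete ground-plane activity zones, with per-zone image evidence discounted by how much of each observation region the relocatable occluder leaves visible. The zone layout, transition matrix, and visibility weighting below are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def forward_step(belief, transition, evidence, visibility):
    """One filtering step: predict with the zone transition matrix, then
    update with image evidence discounted by per-zone visibility."""
    predicted = transition.T @ belief          # first-order Markov prediction
    # Occluded zones contribute weaker evidence; blend toward the mean
    # likelihood so the track can coast through fully hidden zones.
    likelihood = visibility * evidence + (1.0 - visibility) * evidence.mean()
    posterior = predicted * likelihood
    return posterior / posterior.sum()

# Toy example (hypothetical numbers): 3 activity zones around a parked vehicle.
transition = np.array([[0.8, 0.2, 0.0],
                       [0.1, 0.8, 0.1],
                       [0.0, 0.2, 0.8]])
belief = np.array([1.0, 0.0, 0.0])             # person starts in zone 0
visibility = np.array([1.0, 0.3, 1.0])         # zone 1 mostly occluded
evidence = np.array([0.2, 0.7, 0.1])           # detector response per zone
belief = forward_step(belief, transition, evidence, visibility)
```

Even though zone 1 has the strongest raw detector response, its low visibility tempers the update, so the posterior keeps substantial mass on the visible zone the person started in.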
We demonstrate our formulation by tracking low-resolution, partially-occluded pedestrians in the vicinity of parked vehicles. In these scenarios, some tracking formulations that rely on part-based person detectors may fail completely. Our pedestrian tracker fares well: it compares favorably with state-of-the-art pedestrian detectors, lowering false positives by 29% and false negatives by 42%, and with a deformable-contour-based tracker.
Learning to separate: detecting heavily-occluded objects in urban scenes
While visual object detection with deep learning has received much attention in the past decade, cases in which heavy intra-class occlusions occur have not been studied thoroughly. In this work, we propose a Non-Maximum-Suppression (NMS) algorithm that dramatically improves detection recall while maintaining high precision in scenes with heavy occlusions. Our NMS algorithm is derived from a novel embedding mechanism in which the semantic and geometric features of the detected boxes are jointly exploited. The embedding makes it possible to determine whether two heavily-overlapping boxes belong to the same object in the physical world. Our approach is particularly useful for car and pedestrian detection in urban scenes, where occlusions happen often. We show the effectiveness of our approach by creating a model called SG-Det (short for Semantics and Geometry Detection) and testing it on two widely-adopted datasets, KITTI and CityPersons, on which it achieves state-of-the-art performance. Our code is available at https://github.com/ChenhongyiYang/SG-NMS.
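A minimal sketch of the embedding idea (the actual SG-NMS algorithm in the paper is more involved): a lower-scoring box that heavily overlaps a kept box is suppressed only when its learned embedding is close to the kept box's, i.e. when the two boxes likely cover the same physical object. Distinct embeddings let two mutually occluding objects both survive. All box coordinates, scores, and embeddings below are made-up illustrations.

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def embedding_nms(boxes, scores, embeddings, iou_thr=0.5, emb_thr=1.0):
    """Greedy NMS in score order, but suppress an overlapping box only if
    its embedding is within emb_thr of a kept box (same-object evidence)."""
    order = np.argsort(scores)[::-1]
    keep = []
    for i in order:
        same_object = any(
            iou(boxes[i], boxes[j]) > iou_thr
            and np.linalg.norm(embeddings[i] - embeddings[j]) < emb_thr
            for j in keep)
        if not same_object:
            keep.append(int(i))
    return keep

# Two heavily-overlapping pedestrians: box 1 has a distinct embedding (a
# different physical person) and survives; box 2 matches box 0's embedding
# (a duplicate detection of the same person) and is suppressed.
boxes = np.array([[0, 0, 10, 20], [2, 0, 12, 20], [0, 1, 10, 21]], float)
scores = np.array([0.9, 0.8, 0.7])
embeddings = np.array([[0.0, 0.0], [3.0, 3.0], [0.1, 0.1]])
kept = embedding_nms(boxes, scores, embeddings)  # → [0, 1]
```

Classical IoU-only NMS would have discarded box 1 as well; the embedding test is what separates "same object seen twice" from "two occluding objects".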
Take your Eyes off the Ball: Improving Ball-Tracking by Focusing on Team Play
Accurate video-based ball tracking in team sports is important for automated game analysis, yet it has proven very difficult because the ball is often occluded by the players. In this paper, we propose a novel approach to this issue: we formulate tracking as deciding which player, if any, is in possession of the ball at any given time. This differs markedly from standard approaches, which first attempt to track the ball and only then assign possession. We show that our method substantially improves performance when applied to long basketball and soccer sequences.
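One way to picture the possession-first formulation (this sketch is not the paper's actual model) is as decoding the most likely sequence of possession states over a chain whose states are {player A, player B, ball free}, with a standard Viterbi pass; the ball's coarse trajectory then follows the possessor's position. All probabilities below are illustrative.

```python
import numpy as np

def viterbi(log_obs, log_trans):
    """Most likely state path; log_obs is (T, S), log_trans is (S, S)."""
    T, S = log_obs.shape
    score = log_obs[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans   # cand[i, j]: come from i, go to j
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_obs[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Per-frame evidence that the ball is near player A, near player B, or free.
log_obs = np.log([[0.7, 0.1, 0.2],
                  [0.6, 0.2, 0.2],
                  [0.1, 0.2, 0.7],
                  [0.1, 0.7, 0.2]])
# Possession is sticky; changes of possession pass through the "free" state.
log_trans = np.log([[0.80, 0.05, 0.15],
                    [0.05, 0.80, 0.15],
                    [0.30, 0.30, 0.40]])
path = viterbi(log_obs, log_trans)  # → [0, 0, 2, 1]: A holds, pass, B holds
```

The sticky transition prior is what makes the decoded possession sequence robust to frames where the ball itself is invisible, which is exactly the regime in which frame-by-frame ball detection fails.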
ZeroWaste Dataset: Towards Deformable Object Segmentation in Extreme Clutter
Less than 35% of recyclable waste is actually recycled in the US, which leads to increased soil and sea pollution and is a major concern for environmental researchers and the general public alike. At the heart of the problem are the inefficiencies of the waste sorting process (separating paper, plastic, metal, glass, etc.) caused by the extremely complex and cluttered nature of the waste stream. Automated waste detection has great potential to enable more efficient, reliable, and safe waste sorting practices, but it requires label-efficient detection of deformable objects in extremely cluttered scenes. This challenging computer vision task currently lacks suitable datasets and methods in the available literature. In this paper, we take a step towards computer-aided waste detection and present the first in-the-wild industrial-grade waste detection and segmentation dataset, ZeroWaste. The dataset contains over 1800 fully segmented video frames collected from a real waste sorting plant, along with waste material labels for training and evaluating segmentation methods; over 6000 unlabeled frames that can be used for semi-supervised and self-supervised learning; and frames of the conveyor belt before and after the sorting process, comprising a novel setup for weakly-supervised segmentation. Our experimental results demonstrate that state-of-the-art segmentation methods struggle to correctly detect and classify the target objects, which underscores the challenging nature of our proposed real-world task of fine-grained object detection in cluttered scenes. We believe that ZeroWaste will catalyze research in object detection and semantic segmentation in extreme clutter, as well as applications in the recycling domain. Our project page can be found at http://ai.bu.edu/zerowaste/